41 research outputs found

    Pediatric non alcoholic fatty liver disease: old and new concepts on development, progression, metabolic insight and potential treatment targets

    Nonalcoholic fatty liver disease (NAFLD) is the leading cause of chronic liver disease in children. NAFLD has emerged as an extremely prevalent condition, predicted by obesity and male gender. It is defined by hepatic fat infiltration in >5% of hepatocytes, in the absence of other causes of liver pathology, and it covers a spectrum of disease ranging from intrahepatic fat accumulation (steatosis) to various degrees of necroinflammation and fibrosis (non-alcoholic steatohepatitis [NASH]). In children as in adults, NAFLD is associated with severe metabolic impairments that confer an increased risk of developing the metabolic syndrome. It can evolve to cirrhosis and hepatocellular carcinoma, with the consequent need for liver transplantation. Both genetic and environmental factors appear to be involved in the development and progression of the disease, but its pathophysiology is not yet entirely clear. In view of this mounting epidemic among the young, the study of NAFLD should be a priority for all health care systems. This review provides an overview of current and new clinical-histological concepts of pediatric NAFLD and discusses their pathophysiological and therapeutic implications.

    Streaming Algorithms for Subspace Analysis: Comparative Review and Implementation on IoT Devices

    Subspace analysis is a widely used technique for coping with high-dimensional data and has become a fundamental early step in many signal processing tasks. However, traditional subspace analysis often requires a large amount of memory and computational resources, as it is equivalent to eigenspace determination. To address this issue, specialized streaming algorithms have been developed, allowing subspace analysis to run on low-power devices such as sensors or edge devices. Here, we present a classification and comparison of these methods, providing a consistent description and highlighting their features and similarities. We also evaluate their performance on the task of subspace identification, with a focus on computational complexity and memory footprint for different signal dimensions. Additionally, we test implementations of these algorithms on hardware platforms commonly employed for sensors and edge devices.
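
    As a rough illustration of the kind of streaming method surveyed here, the sketch below implements Oja's rule, a classic stochastic update that tracks a k-dimensional principal subspace one sample at a time without ever forming a covariance matrix. This is a minimal sketch, not code from the paper; the signal dimension, subspace rank and learning rate are illustrative assumptions.

```python
import numpy as np

def oja_subspace_update(W, x, lr=1e-2):
    """One streaming update of Oja's rule for a d x k subspace basis W.

    Memory footprint is O(d*k): no d x d covariance matrix is ever formed,
    which is what makes such methods viable on sensors and edge devices.
    """
    y = W.T @ x                                         # project sample (k,)
    W = W + lr * (np.outer(x, y) - W @ np.outer(y, y))  # Hebbian + decay term
    Q, _ = np.linalg.qr(W)                              # re-orthonormalize basis
    return Q

# Illustrative usage: track a 4-dimensional subspace of 64-dimensional samples.
rng = np.random.default_rng(0)
d, k = 64, 4
W = np.linalg.qr(rng.standard_normal((d, k)))[0]  # random initial basis
mixing = rng.standard_normal((d, k))              # ground-truth low-rank model
for _ in range(5000):
    x = mixing @ rng.standard_normal(k)           # one streaming sample
    W = oja_subspace_update(W, x)
```

    Streaming methods of this family differ mainly in how, and how often, they re-orthogonalize the basis, which drives the complexity and memory trade-offs such a comparison focuses on.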

    Low-power fixed-point compressed sensing decoder with support oracle

    Approaches for reconstructing signals encoded with Compressed Sensing (CS) techniques and based on Deep Neural Networks (DNNs) are receiving increasing interest in the literature. In a recent work, a new DNN-based method named Trained CS with Support Oracle (TCSSO) was introduced, which splits signal reconstruction into two separate tasks: support identification and measurement decoding. The aim of this paper is to improve the TCSSO framework by considering actual implementations on finite-precision hardware. Solutions with a low memory footprint and low computation requirements are obtained by employing fixed-point notation and by reducing the number of bits employed. Results using synthetic electrocardiogram (ECG) signals as a case study show that this approach, even in a resource-constrained scenario, still outperforms current state-of-the-art CS approaches.
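
    The paper's code is not reproduced here; as a minimal sketch of the fixed-point idea it builds on, the snippet below quantizes an array (e.g. decoder weights) to a signed fixed-point format with a configurable number of total and fractional bits. The bit widths and matrix sizes are illustrative assumptions.

```python
import numpy as np

def to_fixed_point(x, total_bits=8, frac_bits=6):
    """Quantize an array to signed fixed-point with the given bit budget.

    Values are scaled by 2**frac_bits, rounded, and clipped to the
    representable signed range; fewer bits means a smaller memory
    footprint at the price of larger quantization error.
    """
    scale = 2 ** frac_bits
    lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    return np.clip(np.round(x * scale), lo, hi).astype(np.int32), scale

def from_fixed_point(q, scale):
    """Real-valued approximation recovered from the fixed-point integers."""
    return q.astype(np.float64) / scale

# Illustrative usage: quantize a random weight matrix and check the error.
rng = np.random.default_rng(1)
W = rng.uniform(-1, 1, size=(16, 32))
q, scale = to_fixed_point(W, total_bits=8, frac_bits=6)
err = np.max(np.abs(W - from_fixed_point(q, scale)))
print(err)  # bounded by half of one least-significant bit, 2**-7 here
```

    Shrinking total_bits trades reconstruction quality for memory, which is exactly the design space a finite-precision decoder study has to sweep.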

    Deep Neural Oracles for Short-Window Optimized Compressed Sensing of Biosignals

    The recovery of sparse signals from their linear mapping onto lower-dimensional spaces can be partitioned into a support estimation phase and a coefficient estimation phase. We propose to estimate the support with an oracle based on a deep neural network trained jointly with the linear mapping at the encoder. The oracle's prediction is then used to estimate the coefficients by pseudo-inversion. This architecture allows the definition of an encoding-decoding scheme with state-of-the-art recovery capabilities when applied to biological signals such as ECG and EEG, while allowing extremely low-complexity encoders. As an additional feature, oracle-based recovery is able to self-assess, indicating with remarkable accuracy which chunks of signal may have been reconstructed with unsatisfactory quality. This self-assessment capability is unique in the CS literature and paves the way for further improvements depending on the requirements of the specific application. As an example, our scheme is able to compress an ECG or EEG signal by a factor of 2.67 at satisfactory quality, with a complexity equivalent to only 24 signed sums per processed sample.
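
    The coefficient estimation phase described above admits a compact sketch: once an oracle proposes a support set, the non-zero coefficients are recovered by pseudo-inverting the corresponding columns of the sensing matrix. This is a minimal sketch assuming an ideal oracle; the dimensions (chosen so that n/m ≈ 2.67, echoing the compression factor above) and the random Gaussian encoder are illustrative stand-ins for the jointly trained encoder and network of the paper.

```python
import numpy as np

def recover_with_support(A, y, support):
    """Least-squares recovery of a sparse x from y = A @ x given its support.

    Only the columns of A indexed by the estimated support enter the
    pseudo-inversion; every other coefficient is set to zero.
    """
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = np.linalg.pinv(A[:, support]) @ y
    return x_hat

# Illustrative usage with a random Gaussian encoder and a 5-sparse signal.
rng = np.random.default_rng(2)
n, m, k = 128, 48, 5                         # n/m ~ 2.67 compression
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)
y = A @ x                                    # low-dimensional measurements
x_hat = recover_with_support(A, y, support)  # ideal (oracle) support
print(np.allclose(x, x_hat))                 # exact up to numerics -> True
```

    When the estimated support is wrong, the residual norm of y - A @ x_hat grows, which is one plausible self-assessment signal, though not necessarily the one used in the paper.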

    Plantar pain is not always fasciitis

    The case is described of a patient with chronic plantar pain, diagnosed as fasciitis, which did not improve with conventional treatment. Magnetic resonance imaging revealed flexor hallucis longus tenosynovitis, which improved after local glucocorticoid injection.

    Event-based Classification with Recurrent Spiking Neural Networks on Low-end Micro-Controller Units

    Due to its intrinsic sparsity in both time and space, event-based data is optimally suited for edge-computing applications that require low power and low latency. Time-varying signals encoded with this data representation are best processed with Spiking Neural Networks (SNNs). In particular, recurrent SNNs (RSNNs) can solve temporal tasks with a relatively low number of parameters, which supports their hardware implementation in resource-constrained computing architectures. These premises motivate exploring the properties of such structures on low-power processing systems, to test their limits in both computational accuracy and resource consumption without resorting to full-custom implementations. In this work, we implemented an RSNN model on a low-end, resource-constrained ARM Cortex-M4-based Micro-Controller Unit (MCU). We trained it on a down-sampled version of the N-MNIST event-based digit-recognition dataset as an example to assess its performance in the inference phase. With an accuracy of 97.2%, the implementation has an average energy consumption as low as 4.1 μJ and a worst-case computation time of 150.4 μs per time-step at an operating frequency of 180 MHz, showing that deploying RSNNs on MCU devices is a feasible option for small real-time image-vision tasks.
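
    As a minimal sketch of the recurrent spiking dynamics such an MCU implementation must step through, the snippet below runs a discrete-time leaky integrate-and-fire layer with recurrent connections. The layer sizes, decay constant, threshold and soft-reset rule are illustrative assumptions, not the trained model of the paper.

```python
import numpy as np

def rsnn_step(v, s, x, W_in, W_rec, decay=0.9, threshold=1.0):
    """One discrete time-step of a recurrent leaky integrate-and-fire layer.

    v: membrane potentials, s: previous binary spike vector, x: input event
    frame. On an MCU, this loop body is what runs once per time-step.
    """
    v = decay * v + W_in @ x + W_rec @ s   # leak + feed-forward + recurrence
    s = (v >= threshold).astype(v.dtype)   # emit spikes where threshold is hit
    v = v - s * threshold                  # soft reset by subtraction
    return v, s

# Illustrative usage: 64 input channels, 128 recurrent neurons, 100 steps.
rng = np.random.default_rng(3)
n_in, n_rec = 64, 128
W_in = 0.1 * rng.standard_normal((n_rec, n_in))
W_rec = 0.05 * rng.standard_normal((n_rec, n_rec))
v = np.zeros(n_rec)
s = np.zeros(n_rec)
for _ in range(100):
    x = (rng.random(n_in) < 0.05).astype(float)  # sparse event frame
    v, s = rsnn_step(v, s, x, W_in, W_rec)
```

    Because x and s are sparse binary vectors, the matrix-vector products reduce to accumulations over the active indices only, which is what keeps per-step energy and latency low on hardware of this class.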

    Online Monitoring of the Osiris Reactor with the Nucifer Neutrino Detector

    Originally designed as a new nuclear reactor monitoring device, the Nucifer detector has successfully detected its first neutrinos. We provide the second-shortest-baseline measurement of the reactor neutrino flux. The detection of electron antineutrinos emitted in the decay chains of the fission products, combined with reactor core simulations, provides a new tool to assess both the thermal power and the fissile content of the whole nuclear core, and could be used by the International Atomic Energy Agency (IAEA) to enhance the safeguards of civil nuclear reactors. Deployed only 7.2 m away from the compact Osiris research reactor core (70 MW) operating at the Saclay research centre of the French Alternative Energies and Atomic Energy Commission (CEA), the experiment also exhibits a well-suited configuration to search for a new short-baseline oscillation. We report the first results of the Nucifer experiment, describing the performance of the 0.85 m³ detector remotely operating at a shallow depth equivalent to 12 m of water and under intense background radiation conditions. Based on 145 (106) days of data with the reactor ON (OFF), leading to the detection of an estimated 40760 electron antineutrinos, the mean number of detected antineutrinos is 281 ± 7 (stat) ± 18 (syst) electron antineutrinos/day, in agreement with the prediction of 277(23) electron antineutrinos/day. Due to the large background, however, no conclusive results on the existence of light sterile neutrinos could be derived. As a first societal application, we quantify how antineutrinos could be used for the Plutonium Management and Disposition Agreement.

    ANTARES: the first undersea neutrino telescope

    The ANTARES Neutrino Telescope was completed in May 2008 and is the first operational neutrino telescope in the Mediterranean Sea. The main purpose of the detector is to perform neutrino astronomy, and the apparatus also offers facilities for marine and Earth sciences. This paper describes the design, construction and installation of the telescope in the deep sea, offshore from Toulon in France. An illustration of the detector performance is given.

    The ALICE experiment at the CERN LHC

    ALICE (A Large Ion Collider Experiment) is a general-purpose, heavy-ion detector at the CERN LHC which focuses on QCD, the strong-interaction sector of the Standard Model. It is designed to address the physics of strongly interacting matter and the quark-gluon plasma at extreme values of energy density and temperature in nucleus-nucleus collisions. Besides running with Pb ions, the physics programme includes collisions with lighter ions, lower-energy running and dedicated proton-nucleus runs. ALICE will also take data with proton beams at the top LHC energy to collect reference data for the heavy-ion programme and to address several QCD topics for which ALICE is complementary to the other LHC detectors. The ALICE detector has been built by a collaboration currently including over 1000 physicists and engineers from 105 institutes in 30 countries. Its overall dimensions are 16 × 16 × 26 m³ with a total weight of approximately 10 000 t. The experiment consists of 18 different detector systems, each with its own specific technology choice and design constraints, driven both by the physics requirements and by the experimental conditions expected at the LHC. The most stringent design constraint is to cope with the extreme particle multiplicity anticipated in central Pb-Pb collisions. The different subsystems were optimized to provide high-momentum resolution as well as excellent Particle Identification (PID) over a broad range in momentum, up to the highest multiplicities predicted for the LHC. This will allow for comprehensive studies of hadrons, electrons, muons and photons produced in the collisions of heavy nuclei. Most detector systems are scheduled to be installed and ready for data taking by mid-2008, when the LHC is scheduled to start operation, with the exception of parts of the Photon Spectrometer (PHOS), Transition Radiation Detector (TRD) and Electromagnetic Calorimeter (EMCal); these detectors will be completed for the high-luminosity ion run expected in 2010. This paper describes in detail the detector components as installed for the first data taking in the summer of 2008.